Authors
Abstract
When parallelizing such an application by hand, it is very difficult to analyze the exact communication required and then to use the communication routines provided by the machine to communicate efficiently. Our runtime and compiler support can be used to parallelize such applications conveniently. Our experimental results have shown that code parallelized using the compiler incurs only a small overhead compared to the best hand-parallelized code (i.e., code parallelized by invoking the system's communication primitives by hand). While the design of our runtime system was motivated by multiblock and multigrid applications, our runtime primitives can in many cases be used for regular codes as well.
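By hand, this communication analysis amounts to working out, for every ghost cell of a processor's local block, which neighbor owns it and must send it. A minimal sketch of that computation for a 1-D block distribution follows; the function names and the simple ceiling-sized block layout are illustrative assumptions, not the interface of the paper's actual runtime library.

```python
def block_owner(i, n, p_count):
    """Owner of global index i under a block distribution with
    ceil(n / p_count) contiguous elements per processor."""
    b = -(-n // p_count)  # ceiling division
    return i // b

def ghost_recv_schedule(p, n, p_count, width):
    """Map each neighboring owner to the global ghost indices that
    processor p must receive from it (1-D mesh, ghost region of
    `width` cells on each side of the local block)."""
    b = -(-n // p_count)
    lo, hi = p * b, min((p + 1) * b, n) - 1
    ghosts = list(range(lo - width, lo)) + list(range(hi + 1, hi + 1 + width))
    sched = {}
    for i in ghosts:
        if 0 <= i < n:  # skip ghosts that fall outside the array
            sched.setdefault(block_owner(i, n, p_count), []).append(i)
    return sched
```

For example, with 16 elements on 4 processors and a ghost width of 1, processor 1 must receive global index 3 from processor 0 and global index 8 from processor 2: `ghost_recv_schedule(1, 16, 4, 1)` returns `{0: [3], 2: [8]}`. A runtime library can precompute such a schedule once and reuse it across time steps.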
We therefore believe that our runtime support and compiler techniques can be used by compilers for HPF-style parallel programming languages in general.

We would like to thank Sanjay Ranka, Alok Choudhary, and Zeki Bozkus for many enlightening discussions and for allowing us to integrate our runtime support into their emerging Fortran 90D compiler. The detailed discussions we had with them during their visits to Maryland were extremely productive. We are also grateful to V. Vatsa and M. Senetrik at NASA Langley for giving us access to the multiblock TLNS3D application code. We would also like to thank John van Rosendale at ICASE and Andrea Overman at NASA Langley for making their sequential and hand-parallelized multigrid code available to us. We also thank Jim Humphries for creating a portable version of the runtime library.
Similar papers
Compiling Array Expressions for Efficient Execution on Distributed-Memory Machines
Array statements are often used to express data parallelism in scientific languages such as Fortran 90 and High Performance Fortran. In compiling array statements for a distributed-memory machine, efficient generation of communication sets and local index sets is important. We show that for arrays distributed block-cyclically on multiple processors, the local memory access sequence and communicati...
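The notion of a local index set can be made concrete with a naive enumeration. This is a sketch only: the whole point of the compilation techniques surveyed here is to compute these sets in closed form, without scanning every global index as this brute-force version deliberately does.

```python
def local_section_indices(p, n, block, p_count, lo=0, hi=None, step=1):
    """Global indices of the array section lo:hi:step that processor p
    owns under a block-cyclic(block) distribution over p_count
    processors: index i lives in block i // block, and blocks are
    dealt out cyclically, so i's owner is (i // block) % p_count."""
    if hi is None:
        hi = n
    return [i for i in range(lo, hi, step)
            if (i // block) % p_count == p]
```

With 16 elements, block size 2, and 4 processors, processor 1 owns blocks 1 and 5, i.e. global indices `[2, 3, 10, 11]`; restricted to the strided section `0:16:3` only index 3 survives. These are exactly the sets a compiler must intersect with the right-hand-side access pattern to derive send/receive sets.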
An Integrated Runtime and Compile-time Approach for Parallelizing Structured and Block Structured Applications
Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). In this paper, we present a combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independe...
Compiler Optimizations for Fortran D on MIMD Distributed-Memory Machines
Massively parallel MIMD distributed-memory machines can provide enormous computation power. However, the difficulty of developing parallel programs for these machines has limited their accessibility. This paper presents compiler algorithms to automatically derive efficient message-passing programs based on data decompositions. Optimizations are presented to minimize load imbalance and communication...
Compiling Array Statements for Efficient Execution on Distributed-Memory Machines: Two-level
In languages such as High Performance Fortran (HPF), array statements are used for expressing data parallelism. In compiling array statements for distributed-memory machines, efficient enumeration of local index sets and communication sets is important. The virtual processor approach, among several other methods, has been proposed for efficient enumeration of these index sets. In this paper, using ...
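The idea behind the virtual processor approach can be illustrated with a small sketch. A block-cyclic(block) distribution can be viewed as a cyclic distribution of the array's blocks over the physical processors; each block then behaves like a "virtual processor" whose index sets are easy to enumerate. This shows only that basic decomposition (the published approach additionally distinguishes virtual-block and virtual-cyclic views, which are not reproduced here), and the function name is illustrative.

```python
def virtual_block_view(n, block, p_count):
    """Decompose a block-cyclic(block) distribution of n elements into
    its virtual-processor view: block j (a contiguous chunk of `block`
    global indices) is owned cyclically by physical processor
    j % p_count. Returns (block_id, owner, global_indices) triples."""
    n_blocks = -(-n // block)  # ceiling division
    return [(j, j % p_count, list(range(j * block, min((j + 1) * block, n))))
            for j in range(n_blocks)]
```

For instance, `virtual_block_view(8, 2, 2)` yields blocks 0 and 2 on processor 0 and blocks 1 and 3 on processor 1; within each virtual block the owned indices are a plain contiguous range, which is what makes closed-form index-set enumeration tractable.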
Chapter 1: An Overview of the SUIF Compiler for Scalable Parallel Machines
We are building a compiler that automatically translates sequential scientific programs into parallel code for scalable parallel machines. Many of the compiler techniques needed to generate correct and efficient code are common across all scalable machines, regardless of whether their address space is shared or distributed. This paper describes the structure of the compiler, emphasizing the common a...
Publication date: 1995